To date, attribute discretization is typically performed by replacing the original set of continuous features with a transformed set of discrete ones. This paper provides support for a new idea: discretized features should often be used in addition to existing features, and as such, datasets should be extended by discretization, not replaced. We also claim that discretization algorithms should be developed with the explicit purpose of enriching a non-discretized dataset with discretized values. We present such an algorithm, D-MIAT, a supervised algorithm that discretizes data based on minority interesting attribute thresholds. D-MIAT generates new features only when strong indications exist for one of the target values to be learned, and it is thus intended to be used in addition to the original data. We present extensive empirical results demonstrating the success of using D-MIAT on 28 benchmark datasets. We also demonstrate that 10 other discretization algorithms can be used to generate features that yield improved performance when combined with the original non-discretized data. Our results show that the best predictive performance is attained using a combination of the original dataset with features added by a "standard" supervised discretization algorithm and by D-MIAT.
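The core idea above, extending a dataset with discretized features rather than replacing the continuous ones, can be illustrated with a minimal sketch. This example uses a simple unsupervised quantile discretizer for illustration only; D-MIAT itself is supervised and adds a feature only when a threshold strongly indicates one of the target values, which is not modeled here. The function name and binning choice are assumptions for this sketch, not the paper's algorithm.

```python
import numpy as np

def augment_with_discretized(X, n_bins=3):
    """Append quantile-binned copies of each continuous column to X.

    Illustrates the "extend, don't replace" idea: the returned matrix
    keeps the original continuous features and adds one discrete
    feature per column. (Stand-in for a real discretizer; D-MIAT's
    supervised threshold selection is not reproduced here.)
    """
    discretized = np.empty_like(X, dtype=int)
    for j in range(X.shape[1]):
        # interior quantile cut points for column j (n_bins - 1 edges)
        edges = np.quantile(X[:, j], np.linspace(0, 1, n_bins + 1)[1:-1])
        # bin index of each value: 0 .. n_bins - 1
        discretized[:, j] = np.digitize(X[:, j], edges)
    # original features kept, discretized features appended as new columns
    return np.hstack([X, discretized])
```

A classifier would then be trained on the augmented matrix, so it can exploit either the raw values or the coarse bins, whichever carries more signal for a given attribute.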